Transcript from a Tim Urban AMA: Big History, Emergence, and the Future: From the Big Bang to AI and Civilization's Stakes. A recorded conversation with Tim Urban about Big History, Big Futures, and the Battle for Our Better Minds, hosted by Jordan Myska Allen for UpTrust.
The Forthcoming Book: Big Bang to Heat Death
Jordan: Very happy to have you here. So the forthcoming book—I'm excited. We have fun conversations all the time, but I'm especially excited to talk about this. It's about big history and big future. What would you say it's about?
Tim: Basically, it starts at the Big Bang, where I try to explain inflationary theory—which is extremely confusing. And then I go through the formation of the universe, the formation of the Earth, the origin of life and evolution, human evolution, ancient human history, modern human history, and then we hit the future. There's a bunch of cutting-edge stuff going on right now, and then where it might be going in 50, 100, 200 years. And then we go into the far, far future and end at the end of the universe—the heat death of the universe, which is really far away.
Jordan: What's the through line? Are you looking at free energy? What do you think?
Tim: I talk about energy, but I think the book you're referring to is David Christian's Origin Story, which we've talked about. Amazing book. The through line here is more about the big game-changing moments—essentially for life. If you go to energy, you can zoom out more and talk about even pre-life, and life is a step along the way. But within the part of the book from the origin of life to the end of humans—which is about 90% of the book—it's about: if we zoom way out, what do we see?
We might see the origin of life as obviously a major step. Maybe photosynthesis, or sexual reproduction, or aerobic respiration, or one of these things—these are leaps, but maybe not mega-leaps. Going from single-cell to multicellular, though, is suddenly a mega-leap where the whole game changes. Now you have animals competing on a totally different playing field, a totally different game than the single-cell world.
People can quibble with that—some would say going from archaea to eukaryotic cells is an even bigger deal. I don't know, but I'm calling multicellularity the first major leap after the origin of life. And then when I get into the human story, I think it helps to think about human history and the future through that zoomed-out lens. You might say that the origin of the ability for human brains to connect into a super-brain is actually the next one—maybe right after multicellular life—which happened about 50,000 years ago. And then maybe AI is another one, happening this decade or whatever.
There's a lot of just trying to put things in perspective. Big history goes well with a discussion of the future because it orients you to just: what's even happening here? What's even going on? What is life? What are you even doing? When you do that, you can start to see the meaning of what's happening now a little more easily.
Zooming In: Cells, Atoms, and the Question of Life
Jordan: This is the thing I'm so interested in, because it changes you. I had an experience hiking with my son strapped on my chest when he was about six or nine months old, walking on lava rock, and I was like, "Holy shit—this rock finally got up and learned to walk, and it's happening right here, right now, with me and my son." There's this felt connection to the whole evolutionary sequence. What has changed for you as you've done this research? You've always zoomed way out, but now you're also parenting a toddler, so you're zooming way in and way out. How has your view changed?
Tim: You can also get a lot of mind-blowing reality checks from zooming in—literally to the smallest units—and zooming out to the biggest. Size-wise, time-wise, it helps.
Zooming out on space is the more common, cliché thing. "Oh my God, we're so insignificant. If the sun is a basketball in New York, the next star is like a golf ball in Warsaw, Poland. Oh my God, so much space, we're so tiny." Okay. But zooming in is crazy. You think, okay, I'm this organism, but I'm made of cells that aren't nearly as smart as I am. They have a lot going on, but they can't do the things I can do. It's this emergent property—I'm just a bunch of cells. That's all I am. It's not like there's me and then I have cells. No, I am cells, a pile of cells. And somehow I have consciousness, the ability to reason and do all these things.
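As an aside, the basketball analogy above is not just rhetoric: the arithmetic roughly checks out. Here is a minimal sanity-check sketch, using approximate public figures (a ~24 cm basketball, the Sun's ~1.39 million km diameter, Proxima Centauri at ~4.25 light-years and ~15% of the Sun's width); none of these numbers come from the conversation itself.

```python
# Sanity check of the "basketball in New York, golf ball in Warsaw" analogy.
# All figures are rough approximations, labeled as such.

SUN_DIAMETER_KM = 1.39e6                       # Sun's real diameter
BASKETBALL_M = 0.24                            # regulation basketball diameter
KM_PER_LIGHT_YEAR = 9.46e12
PROXIMA_LY = 4.25                              # distance to the nearest star
PROXIMA_DIAMETER_KM = 0.15 * SUN_DIAMETER_KM   # Proxima is ~15% the Sun's width

# Dimensionless scale factor: shrink the Sun down to basketball size.
scale = BASKETBALL_M / (SUN_DIAMETER_KM * 1000)

# At that scale, how far away is the nearest star, and how big is it?
star_distance_km = PROXIMA_LY * KM_PER_LIGHT_YEAR * scale
star_diameter_cm = PROXIMA_DIAMETER_KM * scale * 1e5   # km -> cm at model scale

print(f"Nearest star sits ~{star_distance_km:,.0f} km away at this scale")
print(f"...and is ~{star_diameter_cm:.1f} cm across (a golf ball is ~4.3 cm)")
```

The scaled distance comes out near 6,900 km, which is about the real New York-to-Warsaw distance, and Proxima scales to roughly golf-ball size, so the image holds up.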
But at least a cell is a living thing—it's the smallest unit we call life. What gets weird is when you zoom in further. What's a cell made of? Organelles, cytoplasm. What are those made of? You keep zooming in and you see these little protein micro-machines that are twisting and turning and clamping and siphoning. It's wild, incredibly interesting—but the proteins aren't alive. They're just atoms moving because of electrical charge. So that's actually all I am: a bunch of non-living protein micro-machines. You can zoom in further—protein micro-machines are just atoms that have very little going on. They just obey electric charge. So do the micro-machines. So do the cells. So do I. It gets into free-will questions and also just: what is life? I'm made of a bunch of little things that aren't alive, so what's going on?
The book does lots of stuff like that. If I have a thought process I find interesting, I just put it in the book, because it's fun.
Jordan: I think about this a lot. You're talking about these protein micro-machines and atoms—we act like this is the standard empirical, scientific-materialist view. But protein micro-machines and atoms behave super differently from this table my stuff is sitting on. I almost think of it as proto-matter—the stuff these things are made of. What about proto-life? Is there just a switch where things are dead and then suddenly alive? Or is there a spectrum?
Tim: I think it's a spectrum. Start with proto-cells, because a cell can form without life. A cell membrane is just what lipids do when they're in a certain configuration; they'll form a membrane on their own. Now suppose inside that membrane are some primitive RNA molecules—pulled together, whether as a freak event or commonly, we don't really know, just by electrical charges nudging things into certain configurations. When one of them has the property of being able to self-replicate, and it tends to because of those charges, and it happens to be wrapped in a proto-cell membrane, and things start passing through the pretty loose early membranes... before you know it, it just keeps going, and one day you're like, "Oh my God, that's a real cell. I guess there's life."
Maybe the RNA itself we could call life. Different people have different definitions. What's less interesting to me is the semantic moment, because clearly the proto-stuff is doing the same thing as life. Once something starts self-replicating and therefore undergoing Darwinian pressure—because the thing that self-replicates badly just peters out and the molecules drift away, while the things that replicate well have more offspring—evolution kicks in as soon as you have self-replication with heredity. So the parent thing makes a copy of itself, not something totally different, and then the best copies will persist. You could call it life, but some people say it's not until you get to a full cell. I don't know. But it's weird, because the actual things are just atoms that have no life—they're no different than the atoms in the table.
When you look at the protein micro-machines, it's shockingly intricate—better than anything we could create by far. That's just billions of years of evolution. And when you talk about your kid—I now have a toddler and a baby, and I'm holding my baby up and I'm just like, so many billions of years of evolution had to do its thing to produce this miracle. This fat baby I'm holding. She doesn't know anything. She'll grow up and whine and complain because she doesn't want to eat the food—but she's a miracle. She's an unbelievable engineering miracle.
Jordan: For all we know, it took exactly 13.8 billion years. If we're the only life, that's just how long it took—the entire history of the universe.
Tim: We don't know. It might be common and happens all over the universe—in which case it's not a miracle that it happened, but it's still incredible. Or life is unbelievably rare, and we're literally alone—in which case it's the most freak accident, a one-in-a-quadrillion thing. And then once that started, you end up with the whole biosphere. We don't know.
The Awe of Being Alive
Jordan: When you talk about this stuff, you get worked up into a kind of state of awe.
Tim: Oh yeah. There's so much to blow your mind if you just think about it for a second.
Jordan: I think this is something that's uniquely Tim and freaking great. People love your explanations partly because you put things in super unique ways, but also because you love being in awe at the world.
Tim: To look around at the society you're in—the streets, the buildings, these incredible systems of finance and law, all this intricate meta-thing we've built—what the hell is going on? We're a bunch of primates that until recently were all living in the forest, building nothing, maybe building a tiny shelter or wrapping some fur around us. This is not normal. Civilization is completely bizarre. If you zoom out, until yesterday basically none of this existed. It just happened. What the hell?
I can also just get jaded like anyone else, but writing a book like this is helpful for me. When I want to write about something, I put on that lens of "what's actually going on here?" and suddenly I blow my own mind thinking about it.
Jordan: My mind is blown constantly by these things. Even if—let's say our universe is teeming with life, I know about the Fermi paradox and everything—but just for a moment, let's say it's not. And I look outside and there's a truck. And I'm like, 14 billion years of evolution created a truck.
Tim: Yeah.
Jordan: The truck is the cutting edge of the universe.
Tim: And what if this is the only planet in the whole universe that has something like a truck? But it's just as crazy if it's everywhere. What if there are equivalent alien trucks? They probably use wheels too—they're bound by the same laws of physics. We know that going from single-cell to multicellular probably isn't a great filter candidate, because it emerged independently dozens of times. So things that emerged independently here on lots of different continents—if there are lots of aliens, we can probably expect to see some of those things there too. Maybe on billions of alien planets there are literally trillions of trucks. Swimming pools. Vacations going on. Students studying astronomy and looking at the Milky Way Galaxy with a different name for it. Aliens having dance parties. Aliens getting drunk. It just looks like this quiet, dead spiral to us.
Sometimes when my wife is watching the Great British Baking Show and they're all sitting there baking and talking in their British accents, I think: aliens in Andromeda are looking at this cold, dark, quiet, silent Milky Way galaxy, and this is going on inside it.
Jordan: It almost puts me in an altered state. I feel like I'm on drugs—in a dumb joy.
Tim: And all this is doing is just pointing out obvious things about reality. Things that are obvious once you think about them for a second. It's literally looking with clear eyes at reality and considering what it means, as opposed to being caught up in "I have a 4 PM meeting and I've got to pick up groceries"—just pausing and lifting your head up for a second. Essentially being on drugs.
Jordan: Kids are good for this too. I'm walking along and there are these tiny little flowers—basically weeds—covering a field. And for my kids, those are flowers. They're gorgeous. And then I'm like, "Flowers, man, that's crazy." For whatever reason, a property of the universe is that if you're flamboyant and bright, things are attracted to you and that's good for you. Beauty is this emergent property of the universe, and it's cross-species.
Tim: And likewise, when you see a sunset and think it's beautiful, or you see a beautiful person—that's billions of years in the making. That stuff runs deep. There's obviously a cultural aspect, but I think people find sunsets beautiful all over the world, regardless of culture.
Jordan: Or they find a mountain valley with a stream going through it beautiful all over the world, because it's baked into our DNA. That was a good survival landscape—high ground, a stream for water, lush greenery.
Tim: And if we'd survived better in flat, gray desert, we would think that was beautiful and think nothing of mountains and streams. It's all built from evolution.
Evolution, Morality, and the Strange Species We Are
Tim: This reminds me—you talk about things that have been around for a few hundred million years in our evolutionary stack. I was thinking about income inequality the other day. For hundreds of millions of years, it's been "to the victor go the spoils"—this structural, Darwinian cycle.
There are no rules, no fairness. Evolution is just run by physics, and physics doesn't care. A star blows up, goes supernova, destroys planets—the star does that because physics tells it to. Look at the animal world: basically the same thing. And then we're this weird species who has all that deeply baked into us, but also this capacity to do better—to think, wait a second, those people are just as valuable as us.
Jordan: Physics cares through us, but not until us.
Tim: It's weird. But the physics is generating this moral sense in us too. It's this very strange thing—the idea that those people are just as valuable as us, and it doesn't make sense that it would be okay to do something to them that we don't want them to do to us. No other species really does this. And I've gotten yelled at before for claiming this is uniquely human. People say animals have empathy. Maybe a little bit, but I don't see chimps really extending that to other tribes.
Jordan: On the flip side, for the people arguing against you—lots of humans still don't do it either.
Tim: Most humans, deep down—and that's the thing—we also build these structures of social pressure. I think a lot of people act good because it's rewarded: externally, people will think you're a good person; and internally, you'll feel better because you've had this moral code baked into you. But it doesn't always mean you actually feel these things.
What I call the "primitive mind" is just: survive, reproduce, acquire resources, have power over your environment, protect your young. That runs deep—the chimp feels the same way. And then we build this entire civilization with all these social rules and pressures. Sometimes those rules conflict with or override the primitive impulses. Sometimes they don't—ambition and personal wealth can enhance society. But we do want to create a society where sexual assault is treated as evil.
I think society is just a weird, amazing thing. Walking around the airport, seeing all these people with their clothes on, all behaving under the same moral codes—they bump you and everyone says, "Oh, sorry about that." We're all actors on this stage.
Jordan: I think about that when I run into someone hiking in the greenbelt in Austin, and I'm like, man, if this was only a few hundred years ago, I'd probably be thinking, "Should I kill this person?"
Tim: Totally. Or run. But you can trust it because society runs deep through almost everyone. You can't trust everyone—that's why there are criminals. But you can basically trust almost everyone, at least in a very high-trust society like ours. There are societies with a lot less trust, and I don't think that's because those people are inherently less trusting—I think the ideology currently underlying their society doesn't bake in the trust as well.
Social Media: From Humming Place to Candy Store
Jordan: So this takes me to UpTrust. The primitive mind is the thing social media optimized to reward, and we want to do something different. You're a public persona, you write—how do you keep from getting audience-captured, from having the primitive mind's reward signal drive you in a direction you don't want?
Tim: My job is easier than yours in that way. If you're a solo blogger, and early on you think, "I need to get big, so I need to do stuff that's popular even though I don't like it"—that's when you run into the problem. Because the audience you attract likes the thing you're doing, not the thing you really wanted to do. And if you try to shift, your audience says, "This isn't what I'm here for. You've changed."
I think a lot of solo creators today do what I did, which is, from the beginning, to furiously do what you actually want to do. When you do that, you attract people who tend to find the same things interesting. So when I learned about AI and said, "Oh, I need to write about this," people didn't say, "This isn't what you do," because I'd already made it clear I do lots of different things. If I was interested in it, turns out they were too.
You have the hardest job in the world, though. These social media companies—at least the first round—were like going down the junk food aisle. Cinnamon Toast Crunch, Lucky Charms, the candy aisle. Companies trying to bypass your higher mind and sell directly to the primitive mind. Wake it up and say, "I want that," and the primitive mind takes over. They make billions because primitive minds are easy to trick.
The very early social media was actually quite high-minded. This is Jonathan Haidt's research and Jon Ronson's too—his great book on this and his TED talk. Before 2008 or so, before the like button, before retweet really took hold, it was much more friendly. You would almost admit stuff that was embarrassing, and people would say, "Oh my God, I do that too." Ronson calls it a "humming place"—the opposite of what it's become.
Then you have these new features, and suddenly you're incentivizing totally different behavior. The algorithm starts favoring traffic and virality. What makes a post go viral is outrage, or really catering to a political tribe. It's this one little experiment: with the right system, the right incentives, the best of humanity comes out—and everyone loves being there. Tweak a few settings and suddenly the primitive minds wake up, because there are all these new candy wrappers, and you end up with a pretty shitty environment.
It's not all bad. I follow a lot of awesome accounts on X that are totally high-minded—funny, talking about science, history, whatever. But you end up on the For You page and you're going to see sensationalist, bombastic stuff designed to make you emotional in a not-great way.
I'll give X credit for trying stuff. They came up with Community Notes. They have the Grok button next to every post where you can get a summary or check if it's fake. These are modern tools. But they have so much baggage with the culture that's been built there. What I don't like is when I'm on X and I start to feel like, "Wow, on this really controversial topic, everyone agrees with me." I wrote a book on this—I'm aware enough to say I'm in an algorithmic bubble, and it makes me want to leave the site. A lot of people don't even realize it's happening. They just think, "See, everyone agrees with me. I'm so right." That's all false confidence.
Your job is so hard because you have to reinvent health food in a world that's evolved past it. But we need it so badly.
What's Our Problem: The Super-Brain and the Culture War
Jordan: When you wrote What's Our Problem?, you were saying this is the biggest issue facing civilization. Has that changed as you've zoomed out?
Tim: It's not that the culture war itself is the biggest issue. It's that the issues that will determine our future—AI governance, bioweapons and how we protect against them, what we do with genetic engineering as it gets better—these could take us anywhere from full utopia to full extinction. The range is that big, in my opinion.
The reason I focused on how politics makes smart people dumb is that our societies weren't built by individual humans. Individual humans just aren't that smart. Put a human alone in the forest next to a chimp alone in the forest—I might bet on the chimp to survive. Society was built by this collective human super-organism that happens when our brains connect together. That's why we have all the science and tech and magical knowledge, and why it's moved so quickly.
In certain environments that encourage high-minded discourse and a culture of disagreement, the super-brain is smarter than any one human. When mob mentality and tribalism take over, the super-brain basically shuts off and you end up with these big stomping giants trying to cudgel each other. That's what the macro emergent thing becomes.
The smart super-organism can do anything. I have full faith that it can get us safely to the next chapter. It doesn't mean it can create AI alignment, but it can figure out if it can, and it can figure out if it's a fool's errand right now and then stop. It can make wise decisions. The dumb giants will drive us right off a cliff as a species. They have no foresight. It was bad diplomats and tribal thinking that started World War I. These things are dumb. They can't see two feet in front of their face.
So the culture war to me is not a battle of left and right. It's a battle of what I call low-rung—this tribal, mobish culture that emerges from our primitive minds banding together—versus the higher mind that's in all our heads trying to wrest control back. How that goes will determine whether the big brain gets us to the future safely, or we drive off a cliff.
Jordan: It reminds me—second time I want to plug Ken Wilber's Integral theory to you. He makes a similar argument in a more complicated way.
Tim: A Theory of Everything—is that the one I should read? Everyone recommends this to me.
Jordan: It's a good one to start. Basically, he adds a couple of layers to the same model. Instead of low and high, he has egocentric, ethnocentric, and world-centric thinking, and how they're each trying to dominate. Then he adds one that's pluralistic, which is also fighting against the rational/objective. It's the kind of Stephen Pinker very-objective worldview versus someone else saying, "But what about all these other perspectives?" I don't want to go too deep on the integral rant.
Tim: It's really interesting. I think we need as many people as possible trying to frame this kind of thing so it becomes common, something to think about. Because the first step is awareness—just like if you want a healthier society, step one is educating people on nutrition. Getting people to understand that when a package says "low fat" and "heart healthy," that's not necessarily good for you. Build awareness, and then behavior can change.
My mom said in the 1950s they all ate white bread all the time because it said "enriched" on it and everyone thought it was healthy. There's that early phase when you can fool people before they catch on. With nutrition, it's not going to end our civilization. But if tribalism is one of those cyclical moral panics that come along every few generations, and we're in one now—because of the technology at play, we might not have the luxury of making mistakes and then wising up. The stakes are so high that if we fall into this at this particular time, we might just drive ourselves off a cliff.
The Law of Mad Science
Jordan: There's this law of mad science—it takes high-minded thinking to build the science, pass it on, and eventually build nuclear weapons and AI.
Tim: But once the artifacts are built, it doesn't require that level of high-mindedness to use them. The low-minded entity can take advantage and wreak havoc.
Jordan: You have all these stories—the Bronze Age Collapse, the fall of the Roman Empire, every empire in history. What takes centuries to build can be literally burned down.
Tim: The ultimate example is a library that's centuries in the making, burned down in one hour.
Jordan: And we see this in our toddlers too. We spend hours building a Lego thing and they're like—
Tim: Yes! "Can you build it again?"
Jordan: Exactly.
Tim: That's why we don't want our super-brains behaving like toddlers. Culture war makes us into a macro toddler—unimpressive, no foresight, no understanding of what it's doing. In the past, at least when a civilization like Carthage was burned to the ground, the species persisted and the planet was okay. Now, if we do this wrong, it could be the end for all humans.
And even without the existential risks—even without AI or nukes—just a coup that installs a totalitarian dictator would be awful. It's been so good for so long that we think this is normal, but this is only how things are because people act like adults, generation after generation, to uphold it. It's like any system: stop maintaining it for a month and it deteriorates. The water runs and the heat turns on and the streetlights work only because there's a constant frenzy of people maintaining it all. The default in the universe is entropy. I just want people to feel a little more fear. Thank God for this society—let's do everything we can to maintain it.
Tribalism: High-Rung vs. Low-Rung
Tim: There are different forms of tribalism. The bad kind dehumanizes the other, enforces conformity within your ranks, doesn't change its mind, is rigid and small-minded and shortsighted—beating the other team is all it thinks about. I don't think anything good comes from that. Maybe in a very rare case—like toppling a totalitarian dictator—you want people in that mode. But in a society like modern America, nothing good comes from it.
The high-minded kind says: these aren't evil people. They're not subhuman. They're caught up in something bad. The enemy is the mind virus creating this behavior, not the people. And you continue to challenge your own beliefs, surround yourself with disagreement, and be willing to change your mind.
The telltale sign of bad tribalism is rank hypocrisy. You had the exact opposite reaction when it was the other people doing the same thing. High-rung thinking sticks with principles—if your group starts betraying those principles, it's easy to say, "They're not my people anymore." Low-rung thinking doesn't care about principles. All that matters is the tribe.
Jordan: This ties into a question that came in about people who get really successful and then get surrounded by people who don't challenge them. I've seen this happen to a lot of self-help people, gurus, spiritual leaders—the only people they attract are people who think like them, and they end up in really subtle hypocrisy.
Tim: It's like a helium balloon. I have this graph I like to use: conviction on one axis, knowledge on the other. You want to be on the dotted line—your conviction should match how much you actually know. When you're in an echo chamber, you just drift up into unearned conviction. It's like junk food for the mind—the primitive mind just wants to identify with certain ideas and feel right. Just like you have to work hard not to eat too much sugar, you have to work hard not to drift into the arrogance zone where you have strongly felt, weakly supported beliefs.
If you surround yourself with people who disagree for sport, who love to call out bias and hypocrisy—that's like having a kitchen full of healthy food. I'm on a bunch of text threads where there's nothing my friends love more than catching me being biased. We do it to each other. When someone puts out an opinion about a big news event, someone else will just disagree for sport. Because when everyone agrees, it's boring. And that environment trains your mind to always pause: Am I being biased? Am I doing this?
In an echo chamber, no one calls anyone out on bias. No one can even see it.
Jordan: I saw this a lot during the pandemic. I'm surrounded by people with different views on Trump—some neutral, some positive, some negative. The people who were really negative couldn't understand how a smart, caring person could possibly vote for him. They literally couldn't see it, because they didn't talk to anybody who thought that way.
Tim: All they were seeing was the media-portrayed caricature—the worst version of the other side. The media sensationalizes the worst things those people do. And likewise for the other side.
What I like about my text threads is it's not that we always disagree. It might be five people who can't stand Trump and one who likes him. In a low-rung environment, that person wouldn't even say it—they'd get the cold shoulder, maybe ostracized. In a high-rung environment, it's almost extra fun being the one person who disagrees, because you're not going to make people angry at you personally. It's like, "Here's a fun game—okay, tell me why you like Trump. Try to convince me." People push back on the substance, but nobody's mad. It can be a heated argument, but it's not a fight.
I call the high-rung environment the "idea lab"—where ideas are treated like science experiments—and the low-rung one is the echo chamber.
Audience Capture, Conformity, and Independent Thinking
Tim: We evolved in environments where there was basically one leader and lots of followers. Most of us have this inclination to follow, to please, to fit in. That's not bad in a lot of situations—it can make you a great employee. But if that slider is up too high, you start conforming with really bad things that don't fit with your principles. You forget who you are and what your principles are, because you're entirely focused on being part of the group. That's not good. I don't think it makes you a happy person either—when you get to that level, you feel a deep lack of confidence.
Jordan: Especially in a modern world. It may be that if we lived in a time where conformity was the way, you'd have security and confidence. But we're in a much more complex world.
Tim: A long time ago, if you challenged the tribe, you'd be out and you'd starve. That's still baked into us, but it doesn't make sense today. We can be a little more courageous. A little more independent.
The Future: Orienting Before Opining
Jordan: You've done some really cool research on the future. How do you relate to it? Do you feel like you should be telling people what to do, or is your role more about naming things?
Tim: My role with the future—and it'll continue to be this way—is to be the friend at a table with other friends, except I just spent three weeks reading about this topic that everyone should know about but doesn't yet. And now I can be like, "Here's the situation." Not to say, "Here's the answer," because especially with the future, nobody knows the answers. There are a lot of very interesting, strong opinions that disagree with each other.
I don't think it's above my pay grade, but it's not the crux of what I'm writing. What's more valuable is the steelman: What do they think? What's the best version of what they think? Before that, even: what is this topic? What's the history of it? How did we get here? What are people scared of? What could go wrong? What could go right? Just orient people.
Especially with AI—that one's almost less needed because it's so in the news. But there are 20 different topics within biotech, energy, transportation, cosmology—all of these areas of the future being built right now. Explain what's going on, talk about the science and engineering, use some imagination—imagine what life could be like if things go well versus badly. I actually have fiction vignettes in the book to do that. And then: what do the skeptics say? What do the bullish people say?
And it should be fun along the way. My goal is that everything is either interesting or funny or both. If I can't hook people, it won't reach them. When I've succeeded, I think I've helped orient people—given them a clear mental model of something they didn't have before, so they can understand the news headlines better and pass that information forward. Awareness is step one. If people aren't oriented, nothing else matters. And the worst part is when people start having tribal opinions on something like AI before they're even oriented—they don't know what they're talking about, so they just follow what "their people" say. We don't want that. We want awareness first. Then opinions can start.
Predicting the Unpredictable
Jordan: It seems like we're in almost a Cambrian explosion of futurism. We really can't predict where things will be in a pretty short time.
Tim: When the internet started, were people really predicting Uber? When electricity first came around, did people predict movies, telecommunications, radio, TV? When the car came around, were people thinking about Walmart, suburbia, McDonald's? There are always good and bad unintended consequences. You can't connect the dots from early internet to Uber—you need smartphones, apps, four or five intermediate steps. And we're not good at connecting more than maybe one dot forward. In 2035, it might be seven dots ahead—you can't do that.
With thousands of smart people making predictions, someone will inevitably look like a genius in hindsight—but even a broken clock is right twice a day. Predicting the future is still worth it for playing out different scenarios, so you have the proper level of "holy shit" about the stakes.
Jordan: With cars, for example—if you were in the 1880s and you could do the math on how much highway infrastructure would be needed, you'd say, "There's no way." And yet here we are.
Tim: People say right now, "There's no way you'll get a million people on Mars—you can't even breathe the atmosphere." "There's no way people will live to 500—nobody's lived past 120." Not all of those will come true. But I never say never to any of them. The thought experiment is: take George Washington here for a day and show him around. Or go back to the 1700s and describe today's world—they wouldn't believe you. They didn't even have electricity, so something like the internet or social media wouldn't even be comprehensible.
And we might see that level of progress in our lifetimes. We could go forward in a time machine to a later point in our own lives and be so blown away we wouldn't even understand it.
Parenting in a Rapidly Changing World
Jordan: How do you parent with this?
Tim: A toddler and a baby? Right now it's simple: I'm just spending time with them, loving them, making them laugh. Later, I want to encourage independent thinking, confidence in learning, and a habit of being willing to change your mind. High-rung thinking, basically.
The faster the world moves, the more important high-rung thinking becomes—for keeping yourself safe, for succeeding, for achieving what you want in a world that's different than it was three months ago. You need confident, independent reasoning skills. If all you can do is look at conventional wisdom and what people around you are doing, that's going to lag behind. It's always the independent thinkers who figure things out first.
I want them to be people who can independently come to conclusions, and then when conventional wisdom disagrees, to cautiously trust their own viewpoints—putting conventional wisdom in as a piece of information. "Okay, nobody else thinks this—maybe I'm wrong, I should keep looking." Not "Everyone's lying to me, they're all wrong, I'm smarter than them." But also not "Nobody else thinks this, so I must be wrong." Independent reasoning of a fairly intelligent person is so often smarter than conventional wisdom, which just lags and is slow.
I want them reasoning from first principles about what they should do from 18 to 22, not necessarily just doing what everyone else is doing.
Jordan: There's an interesting tension there. Humanity is so much smarter than any given human—partly because of conventions. But any individual needs to be able to think for themselves, while also drawing from the collective.
Tim: There's a balance. Your brain is doing a dance with conventional wisdom. Sometimes you're going to be smarter than it. Sometimes it's actually wise and you weren't listening. Try to get better at telling the difference, without having complete fealty to either one. But when in doubt, trust your own reasoning.
Initiation Ceremonies and Growing Up
Jordan: I think a lot about initiation ceremonies—how we don't have them and what happens when people don't have them.
Tim: What do you mean by initiation ceremonies?
Jordan: For most of history, tribes had really intense rituals. There's the classic one where you stick your hand in a glove of bullet ants. There are tribes where they'd lock young men in a hut for 30 days with deeply unpleasant rituals. Or the Navajo, where you'd take so much of a local psychoactive that you literally couldn't remember your own name, and then you'd get a new name and be a new person—an adult with different responsibilities.
I don't want my kids to go through anything like that, but there's a pattern: some meaningful landmark that says you're not a child anymore.
Tim: I think there are some nice things about the fact that we have 25-year-old overgrown children in our society. There's something nice about getting to be curious, exploratory, and immature for longer. I don't think we want every 13-year-old to have that dead-serious face from an 1880 photograph. Kids should explore and feel like kids until they're 18 to 22.
But there should be some moment when you start to say, "I need to be better than I was." Political tribalism—I want to encourage people to think of that as really silly, childish behavior. Once you hit your mid-twenties, it should be viewed as embarrassing. The way people might look down on a 28-year-old living in their parents' house and not making any money—I think a 28-year-old being super culture-warry on X or TikTok should seem like that. Right now, you have boomers, 40-year-olds, 30-year-olds throughout society acting like children. We have a lack of shame around it.
The Right Kind of Shame
Jordan: One thing I like about what you're saying is that you're not afraid of the power of shame. I think Brené Brown introduced a really helpful awareness of how shame can be toxic—the difference between guilt and shame. But people have taken that too far to say shame is always bad. I'm like, no—shame is really helpful for getting us out of narcissism.
Tim: I think there's a spectrum. On the far end, "I am a fundamentally worthless, bad person"—we don't want that. Move up the spectrum: "I am acting like a bad person. I'm better than this." Or even just embarrassment—the feeling of "I don't want people to know I was acting like this." I think we need that. When you go to full shame, where it's just you, period, there's no incentive to improve—it's not even a productive emotion.
Jordan: The best shame is actually a connection force. It brings you into contact with the larger world. The worst shame is deeply isolating.
Tim: Same with hypocrisy—someone says "Fuck X group of people," and then when someone else says the same thing about their own ingroup, they call that person a bigot. They should look at those two things side by side and feel embarrassed. It should feel like your pants fell down in public. Within an echo chamber, there's no mechanism for that negative feedback. And this applies to all kinds of things—how you treat your significant other, your kids, how you are at work, how you treat your own body.
The Stakes and the Mission
Jordan: It makes sense you keep bringing it back to this. It's deep in the heart of the UpTrust mission—it's upstream of so many things happening societally right now.
Tim: Yeah, I'm rooting really hard for you guys. When you have this discussion, it highlights how important the company is. Some customers might just see it as a nicer social network. But if you zoom out, there are a bunch of forces dragging society down, and this is one that's trying to do the opposite at a time when it matters. Really a lot.
The Far Future: How Good Could It Get?
Jordan: With the futurism—have you come across people talking about what happens if we actually succeed? If something like UpTrust or Balaji-style network states are successful at creating a coherent planetary nervous system? What are the extreme far futures of humanity actually managing to coordinate?
Tim: The bad possibilities are easier, because there are just a few attractor states: permanent dictatorship, extinction, societal collapse that sets you back to the Stone Age. What's harder and more fun is how good things could get.
I think we could be living in a world where people look back at today and say, "I cannot believe people ever lived like that"—the way we'd look at peasants under a really oppressive dictatorship in the year 800. They'll travel anywhere in the world in four hours on jets faster than we can imagine. People will live as long as they want to. Some will choose not to take any interventions—the hippies and some others will say, "I want to die at 85, and that's my choice." Others will want to live to 3,000—whether that's mastering biology to de-age and refresh cells and back up consciousness, or a robot body, or uploading into a virtual world.
I could see the Fermi Paradox being resolved by really advanced species not showing signs because they're all living in virtual worlds. They might see living in the physical world the way we see living in caves—we don't live outside like our ancestors did; maybe they don't live in the physical world at all.
Cultivated meat—after a bunch of resistance and pushback—I think will sweep the world: billions of people eating really healthy, affordable meat made of real animal cells, but not from an animal. AI, if it ends up well-aligned, would be facilitating many of these things. Like the Culture novels by Iain Banks, where AI runs everything and we're happy about it—"Thank God we don't have to run the world anymore. It was like Lord of the Flies with a bunch of children trying to manage it."
No scarcity. Space habitats. People looking back at today: you still got sick? People died before they were ready? You lost loved ones before you were ready? Transportation was so slow? Distance still mattered? Pain, disease, poverty—just long gone. Climate change fears—cracked, solved, long ago.
And the craziest part is I don't think you have to go 250 years—like George Washington to today. Things are moving so quickly that if it's good, it's going to be good in our lifetimes. And if we don't see it, it's probably because things went really bad and we ended up in a nightmare—where instead of "Imagine how people used to live back in 2026," we'd say, "We didn't know how good we had it."
There are also people in the middle ground who think we'll slowly decline or slowly get better, and that 50 years from now will be a more technological—maybe more depressed—version of today's world. They could be right too. This could all be one big dot-com bubble.
Jordan: Super cool. I don't have any great final question—I never expected to end up doing long-form interviews. It just happened.
Tim: You're doing a lot of really amazing interviews. I like the little world you're building over there. It's great.
Jordan: Thank you. Really appreciate it. Super fun. I look forward to more.
Show Notes & References
People Mentioned
Tim Urban — Writer, illustrator, and co-founder of Wait But Why, a long-form, stick-figure-illustrated blog with over 600,000 subscribers. Author of What's Our Problem? A Self-Help Book for Societies (2023). His 2016 TED talk on procrastination is one of the most-watched TED talks in history with over 74 million views. Website: waitbutwhy.com
Jonathan Haidt — Social psychologist at NYU Stern School of Business, author of The Righteous Mind: Why Good People Are Divided by Politics and Religion (2012) and co-author of The Coddling of the American Mind (2018). Referenced here for his research on social media's impact on political discourse and how platform design changes (like buttons, retweets) shifted online culture.
Jon Ronson — Welsh journalist and author. Referenced for his book So You've Been Publicly Shamed (2015) and his TED talk on online shaming. He described early social media as a "humming place" that later became hostile.
David Christian — Historian, creator of the "Big History" framework, and author of Origin Story: A Big History of Everything (2018). Pioneered the academic field of Big History, which traces history from the Big Bang to the present. His TED talk has been widely viewed.
Ken Wilber — Philosopher and creator of Integral Theory, which maps stages of human development (egocentric, ethnocentric, world-centric, etc.). Author of A Theory of Everything (2000), Sex, Ecology, Spirituality (1995), and many others. Referenced by Jordan as offering a more complex framework similar to Tim's ladder model.
Steven Pinker — Cognitive psychologist at Harvard, author of Enlightenment Now (2018) and The Better Angels of Our Nature (2011). Referenced as representing the rational/objective worldview within the Integral framework discussion.
Iain Banks — Scottish novelist (1954–2013) who published science fiction as Iain M. Banks, author of the Culture series of novels, which depict a post-scarcity, AI-governed utopia. Notable titles include Consider Phlebas (1987), The Player of Games (1988), and Use of Weapons (1990). Referenced by Tim as a vivid fictional depiction of what a positive AI-aligned future could look like.
Brené Brown — Research professor and author known for her work on vulnerability, shame, and empathy. Her books include Daring Greatly (2012) and The Gifts of Imperfection (2010). Referenced in the discussion of how her distinction between guilt and shame has been taken too far by some to mean that shame is always bad.
Balaji Srinivasan — Entrepreneur, author of The Network State: How to Start a New Country (2022). Referenced by Jordan in discussing the concept of technology-enabled "network states" and planetary coordination.
Books Referenced
What's Our Problem? A Self-Help Book for Societies — Tim Urban (2023). Uses the "ladder" framework (high-rung vs. low-rung thinking) to analyze tribalism, political discourse, and the collective intelligence of societies. Features the concept of the "idea lab" vs. the "echo chamber" and the human "super-brain."
Origin Story: A Big History of Everything — David Christian (2018, Little, Brown and Company). Traces the history of the universe from the Big Bang to the present using the Big History framework.
A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality — Ken Wilber (2000, Shambhala). Introduces the Integral framework mapping stages of human development.
So You've Been Publicly Shamed — Jon Ronson (2015, Riverhead Books). Examines the culture of online shaming and its consequences.
The Culture series — Iain M. Banks (1987–2012). A series of science fiction novels depicting a post-scarcity civilization governed by benevolent artificial superintelligences ("Minds"). Key titles: Consider Phlebas, The Player of Games, Use of Weapons, Excession, Look to Windward.
The Network State: How to Start a New Country — Balaji Srinivasan (2022). Proposes technology-enabled governance structures.
Key Concepts & Topics
Big History — An academic field pioneered by David Christian that examines history on the largest possible timescales, from the Big Bang to the present, identifying key "thresholds" of increasing complexity: the origin of stars, the origin of life, multicellularity, the emergence of language and collective learning, etc.
The Fermi Paradox — The apparent contradiction between the high probability of extraterrestrial civilizations existing and the lack of evidence for them. Tim discusses the "Great Filter" hypothesis—the idea that there may be some extremely unlikely evolutionary step that almost no species passes, which would explain why we don't observe alien civilizations.
The Great Filter — A hypothesized barrier in the development of life that prevents most civilizations from becoming interstellar. Candidate barriers include the origin of life, the jump from prokaryotic to eukaryotic cells, multicellularity, and the development of intelligence or technology.
Emergent Properties — The phenomenon where complex systems exhibit properties that their individual components don't possess. Tim uses this to describe how consciousness emerges from non-conscious cells, and how collective intelligence emerges from individual humans.
Protein Micro-Machines — Tim's term for the molecular machinery within cells—enzymes, motor proteins, ribosomes, and other protein complexes that carry out cellular functions through purely physical and chemical processes, despite not being "alive" in themselves.
The Primitive Mind vs. The Higher Mind — Tim's framework from What's Our Problem? The "primitive mind" refers to evolved instincts for survival, reproduction, tribal belonging, and status-seeking. The "higher mind" is our capacity for reasoning, empathy, long-term thinking, and principled behavior. Social media, in Tim's view, was initially designed to engage the higher mind but was reconfigured (through likes, retweets, algorithmic feeds) to appeal to the primitive mind.
High-Rung vs. Low-Rung Thinking — Tim's "ladder" framework. High-rung thinking treats ideas like science experiments—testing them, being willing to change your mind, engaging with disagreement charitably. Low-rung thinking treats ideas like sacred dogma—enforcing conformity, dehumanizing dissenters, and being driven by tribal loyalty rather than principles.
The Idea Lab vs. The Echo Chamber — Tim's terms for high-rung and low-rung social environments. In an idea lab, disagreement is welcomed and even fun. In an echo chamber, dissent is punished and everyone drifts into unearned conviction.
The Super-Brain / Super-Organism — Tim's concept that human civilization functions as a collective intelligence greater than any individual. When functioning in "high-rung" mode, this super-organism can solve enormous problems. When overtaken by tribalism ("low-rung" mode), it becomes a "macro toddler" that destroys what took centuries to build.
The Law of Mad Science — The observation that it takes high-minded, collaborative thinking to create powerful technologies (nuclear weapons, AI), but it doesn't require that same level of wisdom to use or misuse them—meaning the artifacts of intelligence can be wielded by tribal or destructive forces.
Conviction-Knowledge Graph — Tim's visualization of the relationship between how strongly you believe something and how much evidence supports it. The goal is to stay on the "dotted line" where conviction matches knowledge. Echo chambers push people upward into unearned conviction (strongly held, weakly supported beliefs).
Attractor States — Tim's term for the few stable endpoints that civilizations might converge on: permanent dictatorship, extinction, societal collapse, or various forms of flourishing.
Cultivated Meat — Meat grown from real animal cells in a lab rather than from a slaughtered animal. Tim predicts this technology will eventually sweep the world, providing affordable, healthy meat without animal suffering.
The Fermi Paradox and Virtual Worlds — Tim's hypothesis that advanced civilizations might resolve the Fermi Paradox by living primarily in virtual worlds, making their physical presence in the universe invisible—analogous to how we moved from living outdoors to living indoors.
UpTrust — Jordan Myska Allen's trust-based social media platform, designed to algorithmically prioritize credibility and nuance over engagement bait. Tim describes it as "trying to reinvent health food" in a world addicted to social media junk food.
Relatefulness — A community and practice co-founded by Jordan Myska Allen. Referenced when Jordan discusses cultivating group cultures that allow for genuine disagreement.